
Collaborating Authors

 Duhok Governorate


Effect of Information Technology on Job Creation to Support Economic: Case Studies of Graduates in Universities (2023-2024) of the KRG of Iraq

arXiv.org Artificial Intelligence

The aim of this study is to assess the impact of information technology (IT) on employment development among university graduates, which in turn supports the economy. The study uses a descriptive research methodology and a quantitative approach to examine the variables involved. Its focus is to ascertain how graduates of Kurdistan Region universities might use IT to secure employment and contribute significantly to the nation's economic revival. The sample, selected through a judgmental sampling procedure, consisted of 314 people. Data were collected with a questionnaire prepared by the researcher, and SPSS statistical software (version 22) and Excel 2010 were used to edit, compile, and tabulate the results. The results show that information technology is highly innovative, has a promising future, and makes everyday life considerably easier. They also indicate that a solid academic understanding of information technology and its components helps graduates of Kurdistan Region universities find suitable careers. More importantly, IT credentials and knowledge are of great benefit to anyone seeking work or a livelihood. The study's final finding is that information technology has actively advanced the country's economy: not only is IT helping to boost youth employment, but it is also becoming a worthwhile investment for economic growth.


Explainable Artificial Intelligence and Multicollinearity: A Mini Review of Current Approaches

arXiv.org Machine Learning

Explainable Artificial Intelligence (XAI) methods help to understand the internal mechanisms of machine learning models and how they reach a specific decision or take a specific action. A list of informative features is one of the most common outputs of XAI methods. Multicollinearity is one of the major issues that should be considered when XAI generates explanations in terms of the most informative features in an AI system. No review has yet been dedicated to investigating current approaches for handling this significant issue. In this paper, we review the current state-of-the-art approaches to XAI in the context of recent advances in dealing with multicollinearity. To do so, we searched three repositories, Web of Science, Scopus, and IEEE Xplore, for pertinent published papers. After excluding irrelevant papers, seven papers were included in the review. In addition, we discuss current XAI methods and their limitations in dealing with multicollinearity, and suggest future directions.
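The multicollinearity issue described above can be made concrete with a quick diagnostic. The sketch below is not drawn from any of the reviewed papers: it uses variance inflation factors (VIF) from statsmodels on synthetic data to show how strongly dependent predictors can be flagged before an XAI feature ranking is interpreted. The data and the VIF cutoff are illustrative assumptions.

```python
# A minimal diagnostic sketch (not from the reviewed papers): quantify
# multicollinearity with variance inflation factors before trusting an
# XAI feature ranking. The synthetic data and the VIF cutoff are assumptions.
import numpy as np
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(0)
n = 500
x1 = rng.normal(size=n)
x2 = 0.9 * x1 + 0.1 * rng.normal(size=n)  # nearly collinear with x1
x3 = rng.normal(size=n)                   # independent predictor
X = pd.DataFrame({"x1": x1, "x2": x2, "x3": x3})

# Rule of thumb: VIF above roughly 5-10 signals problematic collinearity.
vif = pd.Series(
    [variance_inflation_factor(X.values, i) for i in range(X.shape[1])],
    index=X.columns,
)
print(vif)  # x1 and x2 show inflated values; x3 stays close to 1
```

When collinear predictors such as x1 and x2 are present, a feature ranking that credits one of them heavily while ignoring the other should be read with caution, which is the motivation behind the approaches surveyed in the review.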


Commentary on explainable artificial intelligence methods: SHAP and LIME

arXiv.org Artificial Intelligence

Explainable artificial intelligence (XAI) methods help to communicate how a model works, with the aim of making machine learning models more transparent and increasing end-users' trust in their output. SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME) are two widely used XAI methods, particularly with tabular data. In this commentary piece, we discuss how the explainability metrics of these two methods are generated and propose a framework for interpreting their outputs, highlighting their strengths and weaknesses.
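To make the commentary's subject concrete, the sketch below shows the typical way SHAP and LIME explanations are generated for tabular data. The random-forest model and the scikit-learn breast-cancer dataset are illustrative assumptions, not the setup used in the paper.

```python
# A minimal sketch of producing SHAP and LIME explanations for tabular data.
# The random forest and the scikit-learn breast-cancer dataset are
# illustrative assumptions, not the commentary's own experiments.
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
X, y = data.data, data.target
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# SHAP: additive feature attributions, here via the tree-model explainer.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:50])

# LIME: a local surrogate model fitted around a single instance.
lime_explainer = LimeTabularExplainer(
    X, feature_names=list(data.feature_names), class_names=list(data.target_names)
)
lime_exp = lime_explainer.explain_instance(X[0], model.predict_proba, num_features=5)
print(lime_exp.as_list())  # top local feature contributions for this instance
```

SHAP attributes the prediction additively across all features, while LIME fits a simple surrogate model in the neighbourhood of one instance; the commentary's framework concerns how such outputs should be read.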


Characterizing the contribution of dependent features in XAI methods

arXiv.org Artificial Intelligence

Explainable Artificial Intelligence (XAI) provides tools that help understand how machine learning models work and reach a specific outcome. It increases the interpretability of models and makes them more trustworthy and transparent. In this context, many XAI methods have been proposed, with SHAP and LIME being the most popular. However, these methods assume that the predictors used in the machine learning models are independent, which in general is not necessarily true. Such an assumption casts doubt on the robustness of XAI outcomes, such as the list of informative predictors. Here, we propose a simple yet useful proxy that modifies the outcome of any XAI feature-ranking method to account for dependency among the predictors. The proposed approach has the advantage of being model-agnostic as well as simple to calculate, quantifying the impact of each predictor on the model in the presence of collinearity.
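As an illustration only, and not the proxy proposed in the paper, the sketch below shows one hypothetical way a feature-importance vector from any XAI method could be adjusted for correlation among predictors, by redistributing each feature's score over the features it is strongly correlated with. The function name and the weighting scheme are assumptions made for this example.

```python
# A hypothetical illustration only, NOT the proxy proposed in the paper:
# redistribute a raw importance vector across correlated predictors so that
# strongly dependent features share their attributed importance.
import pandas as pd

def correlation_adjusted_importance(importances: pd.Series, X: pd.DataFrame) -> pd.Series:
    """Adjust raw XAI importances for dependency among predictors.

    importances : raw scores from any feature-ranking XAI method
                  (e.g. mean |SHAP| per feature), indexed by feature name.
    X           : the predictor matrix used to fit the model.
    """
    corr = X.corr().abs()                          # pairwise dependency strength
    weights = corr.div(corr.sum(axis=1), axis=0)   # each row normalised to sum to 1
    adjusted = weights.T @ importances             # spread scores over correlated features
    return adjusted.sort_values(ascending=False)
```

The point of such a post-hoc adjustment is that it is model-agnostic: it operates only on the importance vector and the predictor matrix, without refitting the model, which is the same practical advantage the paper claims for its proxy.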